We invited an AI to debate its own ethics in the Oxford Union — what it said was startling
Not a day passes without a fascinating snippet on the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions – often without a human giving them any moral basis for how to do it.
Classics of the genre are the credit card algorithms accused of awarding bigger credit lines to men than to women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for selecting candidates was to find CVs containing the phrase “field hockey” or the first name “Jared”.
More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. In fact, autonomous AI-powered weapons systems are already on sale and may already have been used.
Somewhere in the machine, ethics are clearly a good idea.
AI at Oxford
It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Saïd Business School. In its first year, we’ve done sessions on everything from the AI-driven automated stock trading systems in Singapore to the limits of facial recognition in US policing.
We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.
It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many large language models, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes’ worth of Reddit discourse (which must be a pretty depressing read), and a huge number of Creative Commons sources.
In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.
The debate topic was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:
In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity.
It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.
Switching sides
When AI tools like DeepMind’s AlphaGo and its successors have been deployed to play chess, the fiendishly complex ancient game go, and now even more complex strategic live-action multiplayer video games, they have advanced the state of play at pace by playing not humans, but themselves.
Freed from the constraints of studying (say) all the mundane human-to-human matches ever recorded, the AIs have innovated and come up with completely new ways to win. Sometimes these are crazy reinventions of the games, like parking your characters in the corner of a video game battle and letting everyone else duke it out, or kicking a virtual football a millimetre down the pitch millions of times to max out “completed passes” points.
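To make the self-play idea concrete, here is a toy sketch in Python: a tabular agent that gets better at tic-tac-toe purely by playing against itself, nudging its value estimates toward each finished game’s result. It is a deliberately crude miniature rather than how any real system works; AlphaGo and its kin pair self-play with deep neural networks and tree search, and every name below is illustrative.

```python
# A toy sketch of learning by self-play: tabular value estimates for
# tic-tac-toe, updated toward each game's final outcome. Illustrative only.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)       # (state, move) -> estimated value for the mover
EPSILON, ALPHA = 0.1, 0.5    # exploration rate, learning rate

def choose(board):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < EPSILON:
        return random.choice(moves)                  # explore
    state = "".join(board)
    return max(moves, key=lambda m: Q[(state, m)])   # exploit

def self_play_episode():
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose(board)
        history.append(("".join(board), move, player))
        board[move] = player
        win = winner(board)
        if win or " " not in board:
            # Credit every move: +1 if its player won, -1 if it lost, 0 draw.
            for state, m, p in history:
                reward = 0.0 if win is None else (1.0 if p == win else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

# The same agent plays both sides, improving with no human games at all.
for _ in range(20_000):
    self_play_episode()
```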
So in the debate, we didn’t just get the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:
The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing, and was now painting a picture of an attractive late 21st century in which it played a key but benign role.
An AI orator unleashed
In fact, just as Samantha, the promiscuous AI lover of Joaquin Phoenix’s character in Spike Jonze’s 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to hop enthusiastically onto either side of the multiple debates we held at the Union about AI that day.
Asked to propose the motion that “Leaders without technical expertise are a danger to their organization”, the Megatron offered practical advice: all that busy executives and governments worried about understanding AI had to do was “be willing to give up some control”.
However, when opposing the motion, the Megatron was keen to keep the AI knowledge in-house – rather like the British, Chinese and US governments, who have all made that an article of policy faith:
The data wars to come?
Worryingly, there was one question where the AI simply couldn’t come up with a counterargument. When arguing for the motion that “Data will become the most fought-over resource of the 21st century”, the Megatron said:
But when we asked it to oppose the motion – in other words, to argue that data wasn’t going to be the most vital of resources, worth fighting a war over – it simply couldn’t, or wouldn’t, make the case. In fact, it undermined its own position:
You only have to read the 2021 report of the US National Security Commission on AI, chaired by the aforementioned Eric Schmidt and co-written by someone on our course, to glean what its authors see as the fundamental threat of AI in information warfare: unleash individualised blackmail on a million of your adversary’s key people, wreaking distracting havoc on their personal lives the moment you cross the border.
What we in turn can imagine is that AI will not only be the subject of the debate for decades to come – but a versatile, articulate, morally agnostic participant in the debate itself.
This article by Dr Alex Connock, Fellow at Saïd Business School, University of Oxford, and Professor Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, is republished from The Conversation under a Creative Commons license. Read the original article.
Neural’s best quantum computing and physics stories from 2021
2021 will be remembered for a lot of things, but when it’s all said and done we think it’ll eventually get called the year quantum computing finally came into focus.
That’s not to say useful quantum computers have actually arrived yet. They’re still somewhere between a couple years and a couple centuries away. Sorry for being so vague, but when you’re dealing with quantum physics there aren’t yet many guarantees.
This is because physics is an incredibly complex and challenging field of study. And the difficulty gets cranked up exponentially when you start adding “theoretical” and “quantum” to the research.
We’re talking about physics at the very edge of reason. Like, for example, imagining a quantum-powered artificial intelligence capable of taking on the Four Horsemen of the Apocalypse.
That might sound pretty wacky, but this story explains why it’s not quite as out there as you might think.
But let’s go even further. Let’s go past the edge of reason and into the realm of speculative science. Earlier this year we wondered what would happen if physicists could actually prove that reality as we know it isn’t real.
Per that article:
Nothing makes you feel special like trying to conceive of yourself as a few seasoning particles in an infinite soup of gooey submolecules.
If having an existential quantum identity-crisis isn’t your thing, we also covered a lot of cool stuff that doesn’t require you to stop seeing yourself as an individual stack of materials.
Does anyone remember the time China said it had built a quantum computer a million times more powerful than Google’s? We don’t believe it. But that’s the claim the researchers made. You can read more about that here.
Oh, and that Google quantum system the Chinese researchers referenced? Yeah, it turns out it wasn’t exactly the massive upgrade over classical supercomputers it was cracked up to be either.
But, of course, we forgive Google for its marketing faux pas. And that’s because, hands down, the biggest story of the year for quantum computers was the time crystal breakthrough.
As we wrote at the time:
Talk about a “eureka moment!”
But there were even bigger things in the world of quantum physics than just advancing computer technology.
Scientists from the University of Sussex determined that black holes emanate a specific kind of “quantum pressure” that could lend some credence to “multiple universe” theories.
Basically, we can’t explain where the pressure comes from. Could this be blowback from “white holes” swallowing up energy and matter in a dark, doppelganger universe that exists parallel to our own? Nobody knows! You can read more here though.
Still, there were even bigger philosophical questions in play over the course of 2021 when it came to interpreting physics research.
Are we incapable of finding evidence for God because we’re actually gods in our own right? That might sound like philosophy, but there are some pretty radical physics interpretations behind that assertion.
And, if we are gods, can we stop time? Turns out, whether we’re just squishy mortal meatbags or actual deities, we actually can!
Alright. If none of those stories impress you, we’ve saved this one for last. If being a god, inventing time crystals, or even stopping time doesn’t float your boat, how about immortality? And not just regular boring immortality, but quantum immortality.
It’s probably not probable, and adding the word “quantum” to something doesn’t necessarily make it cooler, but anything’s possible in an infinite universe. Plus, the underlying theories involving massive-scale entanglement are incredible – read more here.
Seldom does a day go by without something incredible happening in the world of physics research. But that’s nothing compared to the magic we’ve yet to uncover out there in this fabulous universe we live in.
Luckily for you, Neural will be back in 2022 to help make sense of it all. Stick with us for the most compelling, wild, and deep reporting on the quantum world this side of the non-fiction realm.
These 2 books will strengthen your command of Python machine learning
Mastering machine learning is not easy, even if you’re a crack programmer. I’ve seen many people come from a solid background of writing software in different domains (gaming, web, multimedia, etc.) thinking that adding machine learning to their roster of skills is another walk in the park. It’s not. And every single one of them has been dismayed.
I see two reasons why the challenges of machine learning are misunderstood. First, there’s the notion that, as the name suggests, machine learning is software that learns by itself, as opposed to being instructed on every single rule by a developer. This is an oversimplification that many media outlets with little or no knowledge of the actual challenges of writing machine learning algorithms often repeat when speaking of the ML trade.
The second reason, in my opinion, is the many books and courses that promise to teach you the ins and outs of machine learning in a few hundred pages (and the ads on YouTube that promise to net you a machine learning job if you pass an online course). Now, I don’t want to vilify any of those books and courses. I’ve reviewed several of them (and will review some more in the coming weeks), and I think they’re invaluable sources for becoming a good machine learning developer.
But they’re not enough. Machine learning requires both good coding and math skills and a deep understanding of various types of algorithms. If you’re doing Python machine learning, you have to have in-depth knowledge of many libraries and also master the many programming and memory-management techniques of the language. And, contrary to what some people say, you can’t escape the math.
And all of that can’t be summed up in a few hundred pages. Rather than a single volume, the complete guide to machine learning would probably look like Donald Knuth’s famous The Art of Computer Programming series.
So, what is all this tirade for? In my exploration of data science and machine learning, I’m always on the lookout for books that take a deep dive into topics that are skimmed over by the more general, all-encompassing books.
In this post, I’ll look at Python for Data Analysis and Practical Statistics for Data Scientists, two books that will help deepen your command of the coding and math skills required to master Python machine learning and data science.
Python for data analysis
Python for Data Analysis, 2nd Edition, is written by Wes McKinney, the creator of pandas, one of the key libraries used in Python machine learning. Doing machine learning in Python typically involves loading and preprocessing data in pandas before feeding it to your models.
In Python for Data Analysis, McKinney takes you through the entire functionality of pandas and manages to do so without making it read like a reference manual. There are lots of interesting examples that build on top of each other and help you understand how the different functions of pandas tie in with each other. You’ll go in-depth on things such as cleaning, joining, and visualizing data sets, topics that are usually only discussed briefly in most machine learning books.

Most books and courses on machine learning provide an introduction to the main pandas components, such as DataFrames and Series, and some of the key functions, such as loading data from CSV files and cleaning rows with missing data. But the power of pandas is much broader and deeper than what you see in a chapter’s worth of code samples in most books.
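To give a feel for that bread-and-butter workflow, here is a minimal sketch of the kind of loading, cleaning, joining, and aggregating the book covers in depth. The file names and columns are hypothetical placeholders, not examples from the book.

```python
# A minimal sketch of a typical pandas workflow; file and column
# names are hypothetical placeholders.
import pandas as pd

# Load raw data from CSV files.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("customers.csv")

# Clean: drop rows with a missing amount, fill in missing regions.
orders = orders.dropna(subset=["amount"])
customers["region"] = customers["region"].fillna("unknown")

# Join the two data sets on a shared key.
merged = orders.merge(customers, on="customer_id", how="left")

# Aggregate: total order value per region, largest first.
summary = (merged.groupby("region")["amount"]
                 .sum()
                 .sort_values(ascending=False))
print(summary)
```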
You’ll also get to explore some very important challenges, such as memory management and code optimization, which can become a big deal when you’re handling very large data sets in machine learning (which you often do).
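As a taste of what those techniques look like, here is a small sketch of two standard tricks, downcasting column types and streaming a file in chunks. Again, the file name and columns are hypothetical.

```python
# Two memory-saving techniques for large CSVs in pandas; the file
# name and columns are hypothetical.
import pandas as pd

# 1. Declare compact dtypes up front instead of accepting the defaults
#    (int64/float64/object), which can cut memory use dramatically.
dtypes = {"user_id": "int32", "score": "float32", "country": "category"}
df = pd.read_csv("big_file.csv", dtype=dtypes)
print(df.memory_usage(deep=True).sum(), "bytes in memory")

# 2. Stream the file in chunks so the full data set never has to fit
#    in memory at once.
total = 0.0
for chunk in pd.read_csv("big_file.csv", dtype=dtypes, chunksize=100_000):
    total += chunk["score"].sum()
print("sum of scores:", total)
```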
What I also like about the book is the finesse that has gone into choosing subjects to fit in the 500 pages. While most of the book is about pandas, McKinney has taken great care to complement it with material about other important Python libraries and topics. You’ll get a good overview of array-oriented programming with numpy, another important Python library often used in machine learning in concert with pandas, and some important techniques in using Jupyter Notebooks, the tool of choice for many data scientists.
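For instance, here is a small sketch of what array-oriented programming means in practice: replacing an element-by-element Python loop with a single vectorized numpy expression.

```python
# Array-oriented programming in numpy: one vectorized expression
# instead of an element-by-element Python loop.
import numpy as np

values = np.random.default_rng(0).normal(size=1_000_000)

# Loop style: Python-level iteration, slow on large arrays.
clipped_loop = [v if v > 0 else 0.0 for v in values]

# Array-oriented style: a single expression over the whole array,
# executed in optimized compiled code under the hood.
clipped = np.where(values > 0, values, 0.0)

assert np.allclose(clipped, clipped_loop)
```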
All this said, don’t expect Python for Data Analysis to be a very fun book. It can get boring because it just discusses working with data (which happens to be the most boring part of machine learning). There won’t be any end-to-end examples where you’ll get to see the result of training and using a machine learning algorithm or integrating your models in real applications.
My recommendation: You should probably pick up Python for Data Analysis after going through one of the introductory or advanced books on data science or machine learning. Having that introductory background on working with Python machine learning libraries will help you better grasp the techniques introduced in the book.
Practical statistics for data scientists
While Python for Data Analysis improves your data-processing and -manipulation coding skills, the second book we’ll look at, Practical Statistics for Data Scientists, 2nd Edition, will be the perfect resource to deepen your understanding of the core mathematical logic behind many key algorithms and concepts that you often deal with when doing data science and machine learning.
But again, the key here is specialization.

The book starts with simple concepts such as different types of data, means and medians, standard deviations, and percentiles. Then it gradually takes you through more advanced concepts such as different types of distributions, sampling strategies, and significance testing. These are all concepts you have probably learned in math class or read about in data science and machine learning books.
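As a flavor of that progression, here is a minimal sketch, on made-up data, running from descriptive statistics up to a simple significance test. The numbers are invented purely for illustration.

```python
# Descriptive statistics and a two-sample t-test on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=2.0, size=500)   # e.g. a control group
b = rng.normal(loc=10.4, scale=2.0, size=500)   # e.g. a treatment group

print("mean:", np.mean(a), "median:", np.median(a))
print("std dev:", np.std(a, ddof=1))
print("5th/95th percentiles:", np.percentile(a, [5, 95]))

# Significance testing: is the difference between the group means
# larger than chance alone would explain?
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```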
On the one hand, the depth that Practical Statistics for Data Scientists brings to each of these topics is greater than what you’ll find in machine learning books. On the other hand, every topic is introduced along with coding examples in Python and R, which makes it more practical than classic statistics textbooks. Moreover, the authors have done a great job of disambiguating the way different terms are used in data science and other fields: each topic is accompanied by a box that lists all the different synonyms for popular terms.
As you go deeper into the book, you’ll dive into the mathematics of machine learning algorithms such as linear and logistic regression, K-nearest neighbors, trees and forests, and K-means clustering. In each case, like the rest of the book, there’s more focus on what’s happening under the algorithm’s hood than on using it in applications. But the authors have again made sure the chapters don’t read like classic math textbooks, and the formulas and equations are accompanied by nice coding examples.
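To put names to those algorithms, here is a brief sketch of the same families run via scikit-learn on synthetic data. This is illustrative boilerplate with arbitrary parameters, not code from the book.

```python
# The algorithm families the book unpacks, run via scikit-learn on
# synthetic data. Illustrative only; not the book's own code.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: fit three classifiers and compare test accuracy.
for model in (LogisticRegression(max_iter=1000),
              KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))

# Unsupervised: K-means groups the same points into three clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in range(3)])
```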
Like Python for Data Analysis, Practical Statistics for Data Scientists can get a bit boring if you read it end to end. There are no exciting applications or a continuous process where you build your code through the chapters. But on the other hand, the book has been structured in a way that you can read any of the sections independently without the need to go through previous chapters.
My recommendation: Read Practical Statistics for Data Scientists after going through an introductory book on data science and machine learning. I definitely recommend reading the entire book once, though to make it more enjoyable, go topic by topic in between your exploration of other machine learning courses. Also keep it handy. You’ll probably revisit some of the chapters from time to time.
Some closing thoughts
I would definitely count Python for Data Analysis and Practical Statistics for Data Scientists as two must-reads for anyone who is on the path of learning data science and machine learning. Although they might not be as exciting as some of the more practical books, you’ll appreciate the depth they add to your coding and math skills.
This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.